Stochastic mutual information gradient estimation for dimensionality reduction networks


Similar articles

Stochastic Variance Reduction for Policy Gradient Estimation

Recent advances in policy gradient methods and deep learning have demonstrated their applicability for complex reinforcement learning problems. However, the variance of the performance gradient estimates obtained from the simulation is often excessive, leading to poor sample efficiency. In this paper, we apply the stochastic variance reduced gradient descent (SVRG) technique [1] to model-free p...
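The SVRG technique referenced above can be illustrated on a simple finite-sum problem. The sketch below applies SVRG to least-squares regression rather than the paper's policy-gradient setting; the problem, step size, and function names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true

def grad_i(w, i):
    """Gradient of the i-th squared-error term."""
    return (X[i] @ w - y[i]) * X[i]

def full_grad(w):
    """Full-batch gradient of the mean squared error."""
    return X.T @ (X @ w - y) / n

w = np.zeros(d)
lr = 0.02
for epoch in range(50):
    w_snap = w.copy()        # snapshot of the current iterate
    mu = full_grad(w_snap)   # full gradient at the snapshot
    for _ in range(n):
        i = rng.integers(n)
        # variance-reduced stochastic gradient: the correction term
        # grad_i(w_snap, i) - mu has zero mean, so the estimate stays
        # unbiased while its variance shrinks as w approaches w_snap
        g = grad_i(w, i) - grad_i(w_snap, i) + mu
        w -= lr * g
```

The key design point is that each inner step costs only two per-sample gradients plus one periodic full-gradient pass, yet the variance of `g` vanishes near the optimum, which is what allows a constant step size.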


Constrained Maximum Mutual Information Dimensionality Reduction for Language Identification

In this paper we propose Constrained Maximum Mutual Information dimensionality reduction (CMMI), an information-theoretic based dimensionality reduction technique. CMMI tries to maximize the mutual information between the class labels and the projected (lower dimensional) features, optimized via gradient ascent. Supervised and semi-supervised CMMI are introduced and compared with a state of the ...
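The general idea of maximizing label/projection mutual information by gradient ascent can be sketched in a toy form. The snippet below is not CMMI itself: as a simplifying assumption, each class-conditional projection is treated as Gaussian, so I(y; z) = H(z) - H(z|y) reduces to a difference of log-variances with a closed-form gradient.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300
X0 = rng.normal(loc=[0, 0], scale=1.0, size=(n, 2))  # class 0
X1 = rng.normal(loc=[4, 0], scale=1.0, size=(n, 2))  # class 1
X = np.vstack([X0, X1])
y = np.repeat([0, 1], n)

S_total = np.cov(X.T)                                # marginal covariance
S_class = [np.cov(X[y == c].T) for c in (0, 1)]      # per-class covariances
p = [0.5, 0.5]                                       # class priors

w = rng.normal(size=2)
w /= np.linalg.norm(w)
for _ in range(200):
    # gradient of 0.5 * log(w^T S w) with respect to w is S w / (w^T S w)
    g = S_total @ w / (w @ S_total @ w)
    g -= sum(pc * (Sc @ w) / (w @ Sc @ w) for pc, Sc in zip(p, S_class))
    w += 0.1 * g
    w /= np.linalg.norm(w)  # keep the projection direction unit-norm

# the learned direction should align with the class-separating x-axis
print(np.abs(w))
```

Under the Gaussian assumption this objective is closely related to discriminant-style variance ratios; the actual CMMI objective uses a non-parametric mutual information estimate instead.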


Dimensionality reduction based on non-parametric mutual information

In this paper we introduce a supervised linear dimensionality reduction algorithm which finds a projected input space that maximizes the mutual information between input and output values. The algorithm utilizes the recently introduced MeanNN estimator for differential entropy. We show that the estimator is an appropriate tool for the dimensionality reduction task. Next we provide a nonlinear r...
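The MeanNN estimator mentioned above estimates differential entropy, up to an additive constant depending only on the sample size and dimension, from the mean of log pairwise distances. A minimal sketch (the helper name is illustrative):

```python
import numpy as np

def meannn_entropy(X):
    """MeanNN differential-entropy estimate, up to an additive constant:

        H(X) ~ d / (N (N - 1)) * sum_{i != j} log ||x_i - x_j||
    """
    N, d = X.shape
    # all pairwise Euclidean distances
    diff = X[:, None, :] - X[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    off_diag = ~np.eye(N, dtype=bool)  # exclude the i == j terms
    return d / (N * (N - 1)) * np.log(dist[off_diag]).sum()

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
# scaling the data by s shifts differential entropy by exactly d * log(s),
# and this estimator reproduces that shift
print(meannn_entropy(2 * X) - meannn_entropy(X))  # = 2 * log 2
```

Because the estimator is a smooth function of the pairwise distances, it can be differentiated with respect to a projection matrix, which is what makes it usable as a dimensionality-reduction objective.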


Information Preserving Dimensionality Reduction

Dimensionality reduction is a very common preprocessing approach in many machine learning tasks. The goal is to design data representations that on one hand reduce the dimension of the data (therefore allowing faster processing), and on the other hand aim to retain as much task-relevant information as possible. We look at generic dimensionality reduction approaches that do not rely on much task...


An eigenvalue-problem formulation for non-parametric mutual information maximisation for linear dimensionality reduction

Well-known dimensionality reduction (feature extraction) techniques, such as Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA), are formulated as eigenvalue-problems, where the required features are eigenvectors of some objective matrix. Eigenvalue-problems are theoretically elegant, and have advantages over iterative algorithms. In contrast to iterative algorithms, they ...
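The eigenvalue-problem formulation the abstract refers to can be seen in its simplest form with PCA, where the objective matrix is the sample covariance and the features are its top eigenvectors:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4)) @ rng.normal(size=(4, 4))  # correlated data
Xc = X - X.mean(axis=0)                                  # centre the data

C = Xc.T @ Xc / (len(Xc) - 1)      # sample covariance (the objective matrix)
evals, evecs = np.linalg.eigh(C)   # eigh returns eigenvalues in ascending order
W = evecs[:, ::-1][:, :2]          # top-2 principal directions

Z = Xc @ W                         # projected (reduced) data, shape (300, 2)
```

Unlike an iterative solver, the eigendecomposition gives the optimum in one closed-form step, with no learning rate or convergence check; the variance of each projected coordinate equals the corresponding eigenvalue.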



Journal

Journal title: Information Sciences

Year: 2021

ISSN: 0020-0255

DOI: 10.1016/j.ins.2021.04.066